00:00.0 Host bridge: Intel Corporation Haswell-ULT DRAM Controller (rev 0b)
00:02.0 VGA compatible controller: Intel Corporation Haswell-ULT Integrated Graphics Controller (rev 0b)
00:03.0 Audio device: Intel Corporation Haswell-ULT HD Audio Controller (rev 0b)
00:14.0 USB controller: Intel Corporation Lynx Point-LP USB xHCI HC (rev 04)
00:16.0 Communication controller: Intel Corporation Lynx Point-LP HECI #0 (rev 04)
00:16.3 Serial controller: Intel Corporation Lynx Point-LP HECI KT (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I218-LM (rev 04)
00:1b.0 Audio device: Intel Corporation Lynx Point-LP HD Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 6 (rev e4)
00:1c.1 PCI bridge: Intel Corporation Lynx Point-LP PCI Express Root Port 3 (rev e4)
00:1d.0 USB controller: Intel Corporation Lynx Point-LP USB EHCI #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Lynx Point-LP LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation Lynx Point-LP SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation Lynx Point-LP SMBus Controller (rev 04)
I had to dump the El Torito boot image from the ISO they provide; after that I was able to dd the resulting image to a flash drive, and the BIOS update went well, no CD-ROM needed. I updated to version 1.13 of the BIOS, which fixes the suspend/resume bug. By the time you read this, there may be newer versions that fix other things, so check the Lenovo website.
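For reference, a minimal sketch of that procedure, assuming the geteltorito tool from the genisoimage package; the ISO file name and the target device are placeholders, and dd overwrites the target, so double-check the device name first:

# extract the El Torito boot image from the update ISO
geteltorito -o bios-update.img lenovo-bios-update.iso
# write it to the USB stick (replace /dev/sdX with your flash drive!)
sudo dd if=bios-update.img of=/dev/sdX bs=1M
sync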
Open the file /usr/share/X11/xorg.conf.d/50-synaptics.conf for editing.
Find the InputClass section that contains the line Identifier "Default clickpad buttons".
Set the SoftButtonAreas option to "64% 0 1 42% 36% 64% 1 42%"; this defines the sizes of the right and middle buttons.
Enable the AreaBottomEdge option and set its value to 1; this disables pointer movement on the touchpad.
If everything is done right, your InputClass section should look like this:
Section "InputClass" Identifier "Default clickpad buttons" MatchDriver "synaptics" Option "SoftButtonAreas" "64% 0 1 42% 36% 64% 1 42%" Option "AreaBottomEdge" "1" EndSectionEssentially, the first Option line will create a middle button that is 32% of the width and 42% of the height, and a right button that is 32% of the width and 42% of the height. The synaptics manpage (man synpatics) will give you more detail on the general way this works. Of course, something does feel very wrong about editing a file in /usr/share.
330947b save and restore adaptive keyboard mode for suspend and resume
3a9d20b support Thinkpad X1 Carbon 2nd generation's adaptive keyboard

Although this is not supported in Debian testing at the time of writing, a bug was filed in Debian and quickly fixed by Ben Hutchings in Debian kernel version 3.14.2-1, which is currently in sid/unstable. As a result, if you install the latest kernel from Debian unstable (3.14.2-1 or later), the adaptive keyboard just works. If you aren't using Debian and the kernel you are using does not have support, you may need to patch your kernel.
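If you are on Debian testing, one way to pull in the newer kernel from unstable is a targeted install; this is a sketch, assuming you have an unstable entry in your sources.list and are on amd64 (the metapackage name differs on other architectures):

sudo apt-get update
sudo apt-get install -t unstable linux-image-amd64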
Many new Thinkpad laptops have a dock (Thinkpad OneLink Dock) containing a USB Ethernet chip that is supported by the ax88179 driver. However, its USB ID is not included in the driver shipped with the 3.13 kernel used in Trusty. A patch to add this ID has been sent to the LKML (see https://lkml.org/lkml/2014/2/24/649) and it would be very convenient for all users of the dock if it could be applied to the Trusty kernel. If your kernel does not support the USB Ethernet device in the dock, and a newer kernel doesn't fix it, the patch is straightforward.
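Before patching, it may be worth checking whether your driver already knows the ID, and recent kernels can sometimes be taught a new ID at runtime through sysfs. This is my suggestion, not part of the post or the patch; the 17ef:304b ID below is the one I believe the LKML patch adds, and 0b95:1790 is the driver's stock ASIX ID, so verify both against your lsusb output:

# does the shipped driver already claim the dock's ID?
modinfo ax88179_178a | grep -i 17ef

# if not, try adding the ID to the loaded driver at runtime; the last
# two fields reference an existing entry so its driver_info is reused
# (supported on recent kernels; if this does not bind the device, the
# kernel patch is the reliable route)
sudo modprobe ax88179_178a
echo "17ef 304b 0 0b95 1790" | \
  sudo tee /sys/bus/usb/drivers/ax88179_178a/new_id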
Republished by Slate. Translations available in French (Français), Spanish (Español), Chinese (中文).

For almost 15 years, I have run my own email server which I use for all of my non-work correspondence. I do so to keep autonomy, control, and privacy over my email and so that no big company has copies of all of my personal email.

A few years ago, I was surprised to find out that my friend Peter Eckersley, a very privacy-conscious person who is Technology Projects Director at the EFF, used Gmail. I asked him why he would willingly give Google copies of all his email. Peter pointed out that if all of your friends use Gmail, Google has your email anyway. Any time I email somebody who uses Gmail, and any time they email me, Google has that email.

Since our conversation, I have often wondered just how much of my email Google really has. This weekend, I wrote a small program to go through all the email I have kept in my personal inbox since April 2004 (when Gmail was started) to find out.

One challenge with answering the question is that many people, like Peter, use Gmail to read, compose, and send email but configure Gmail to send email from a non-gmail.com From address. To catch these, my program looks through each message's headers that record which computers handled the message on its way to my server and picks out messages that have traveled through google.com, gmail.com, or googlemail.com. Although I usually filter them, my personal mailbox contains emails sent through a number of mailing lists. Since these mailing lists often hide the true provenance of a message, I exclude all messages that are marked as coming from lists using the (usually invisible) Precedence header.

The following graph shows the number of emails in my personal inbox each week in red and the subset from Google in blue. Because the number of emails I receive week to week tends to vary quite a bit, I've included a LOESS smoother which shows a moving average over several weeks.

From eyeballing the graph, the answer seems to be that, although it varies, about a third of the email in my inbox comes from Google! Keep in mind that this is all of my personal email and includes automatic and computer-generated mail from banks and retailers, etc. Although it is true that Google doesn't have these messages, it suggests that the proportion of my truly personal email that comes via Google is probably much higher.

I would also like to know how much of the email I send goes to Google. I can do this by looking at emails in my inbox that I have replied to. This works if I am willing to assume that if I reply to an email sent from Google, it ends up back at Google. In some ways, doing this addresses the problem with the emails from retailers and banks since I am very unlikely to reply to those emails. In this sense, it also reflects a measure of more truly personal email.

I've broken down the proportions of emails I received that come from Google in the graph below for all email (top) and for emails I have replied to (bottom). In the graphs, the size of the dots represents the total number of emails counted to make that proportion. Once again, I've included the LOESS moving average.

The answer is surprisingly large. Despite the fact that I spend hundreds of dollars a year and hours of work to host my own email server, Google has about half of my personal email! Last year, Google delivered 57% of the emails in my inbox that I replied to.
They have delivered more than a third of all the email I've replied to every year since 2006 and more than half since 2010. On the upside, there is some indication that the proportion is going down. So far this year, only 51% of the emails I've replied to arrived from Google.

The numbers are higher than I imagined and reflect somewhat depressing news. They show how complicated it is to think about privacy and autonomy for communication between parties. I'm not sure what to do except encourage others to consider, in the wake of the Snowden revelations and everything else, whether you really want Google to have all your email. And half of mine.

If you want to run the analysis on your own, you're welcome to the Python and R code I used to produce the numbers and graphs.
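For a rough back-of-the-envelope version of the same check without the full Python program, something like this shell sketch works against a single mbox file (the path is assumed, and it ignores Received continuation lines and other subtleties the real analysis handles):

awk '
  function tally() { total++; if (goog && !list) hits++ }
  /^From /       { if (NR > 1) tally(); goog = list = 0 }
  /^Received:.*(google|gmail|googlemail)\.com/ { goog = 1 }
  /^Precedence:/ { list = 1 }
  END            { if (NR) tally()
                   printf "%d of %d messages came via Google\n", hits, total }
' ~/Mail/inbox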
I create my local passphrase using pwget 50 or similar, but any sensible way to create a fairly random password should do it. The authentication details end up in /root/.s3ql/authinfo2, which looks like this:

[s3c]
storage-url: s3c://s.greenqloud.com:443/bucket-name
backend-login: API-login
backend-password: API-password
fs-passphrase: local-password

Armed with these details, it is now time to run mkfs, entering the API details and password to create it:

# mkdir -m 700 /var/lib/s3ql-cache
# mkfs.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl s3c://s.greenqloud.com:443/bucket-name
Enter backend login:
Enter backend password:
Before using S3QL, make sure to read the user's guide, especially
the 'Important Rules to Avoid Loosing Data' section.
Enter encryption password:
Confirm encryption password:
Generating random encryption key...
Creating metadata tables...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.00 MB of compressed metadata.
#
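As far as I know, S3QL refuses to use an authinfo2 file that other users can read, so it is worth locking the permissions down; this is a sanity check on my part, not a step from the original recipe:

# chmod 600 /root/.s3ql/authinfo2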
The next step is mounting the file system to make the storage available.

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 4 upload threads.
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Mounting filesystem...
# df -h /s3ql
Filesystem                              Size  Used Avail Use% Mounted on
s3c://s.greenqloud.com:443/bucket-name  1.0T     0  1.0T   0% /s3ql
#
The file system is now ready for use. I use rsync to store my backups in it, and as the metadata used by rsync is downloaded at mount time, no network traffic (and storage cost) is triggered by running rsync. To unmount, one should not use the normal umount command, as this will not flush the cache to the cloud storage; instead, run the umount.s3ql command like this:

# umount.s3ql /s3ql
#
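For concreteness, the kind of rsync invocation I would use for such backups is along these lines; the exact flags and paths are my assumption, not from the post:

# archive mode, preserving hard links, ACLs and extended attributes
rsync -aHAX --delete /home/ /s3ql/backup/home/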
There is a fsck command available to check the file system and correct any problems detected. This can be used if the local server crashes while the file system is mounted, to reset the "already mounted" flag. This is what it looks like when processing a working file system:

# fsck.s3ql --force --ssl s3c://s.greenqloud.com:443/bucket-name
Using cached metadata.
File system seems clean, checking anyway.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 5000 objects so far..
..processed 10000 objects so far..
..processed 15000 objects so far..
Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Backing up old metadata...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Wrote 0.89 MB of compressed metadata.
#
Thanks to the cache, working on files that fit in the cache is very quick, about the same speed as local file access. Uploading large amounts of data is, for me, limited by the bandwidth out of and into my house. Uploading 685 MiB with a 100 MiB cache gave me 305 kiB/s, which is very close to my upload speed, and downloading the same Debian installation ISO gave me 610 kiB/s, close to my download speed. Both were measured using dd (see the sketch below). So for me, the bottleneck is my network, not the file system code. I do not know what a good cache size would be, but suspect that the cache should be larger than your working set.

I mentioned that only one machine can mount the file system at a time. If another machine tries, it is told that the file system is busy:

# mount.s3ql --cachedir /var/lib/s3ql-cache --authfile /root/.s3ql/authinfo2 \
  --ssl --allow-root s3c://s.greenqloud.com:443/bucket-name /s3ql
Using 8 upload threads.
Backend reports that fs is still mounted elsewhere, aborting.
#
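The dd measurement mentioned above would look something like this; the file names are placeholders, and dd prints the throughput when it finishes:

# write test: stream an ISO into the S3QL mount
dd if=debian-installer.iso of=/s3ql/test.iso bs=1M
# read test: note that reading a file still held in the local cache
# measures disk speed, so remount first to measure the download path
dd if=/s3ql/test.iso of=/dev/null bs=1M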
The file content is uploaded when the cache is full, while the metadata is uploaded once every 24 hours by default. To ensure the file system content is flushed to the cloud, one can either unmount the file system, or ask S3QL to flush the cache and metadata using s3qlctrl:

# s3qlctrl upload-meta /s3ql
# s3qlctrl flushcache /s3ql
#
If you are curious about how much space your data uses in the cloud, and how much compression and deduplication cut down on the storage usage, you can use s3qlstat on the mounted file system to get a report:

# s3qlstat /s3ql
Directory entries:    9141
Inodes:               9143
Data blocks:          8851
Total data size:      22049.38 MB
After de-duplication: 21955.46 MB (99.57% of total)
After compression:    21877.28 MB (99.22% of total, 99.64% of de-duplicated)
Database size:        2.39 MB (uncompressed)
(some values do not take into account not-yet-uploaded dirty blocks in cache)
#
I mentioned earlier that there are several possible suppliers of storage. I did not try to locate them all, but am aware of at least Greenqloud, Google Drive, Amazon S3 web services, Rackspace and Crowncloud. The latter even accepts payment in Bitcoin. Pick one that suits your needs. Some of them provide several GiB of free storage, but the pricing models are quite different and you will have to figure out what suits you best.

While researching this blog post, I had a look at research papers and posters discussing the S3QL file system. There are several, which told me that the file system is getting a critical check by the science community and increased my confidence in using it. One nice poster is titled "An Innovative Parallel Cloud Storage System using OpenStack's Swift Object Store and Transformative Parallel I/O Approach" by Hsing-Bung Chen, Benjamin McClelland, David Sherrill, Alfred Torrez, Parks Fields and Pamela Smith. Please have a look.

Given my problems with different file systems earlier, I decided to check out the mounted S3QL file system to see if it would be usable as a home directory (in other words, that it provided POSIX semantics when it comes to locking and umask handling etc). Running my test code to check file system semantics, I was happy to discover that no error was found. So the file system can be used for home directories, if one chooses to do so.

If you do not want a locally mounted file system, and want something that works without the Linux FUSE file system, I would like to mention the Tarsnap service, which also provides locally encrypted backup using a command line client. It has a nicer access control system, where one can split out read and write access, allowing some systems to write to the backup and others to only read from it.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.
|      | On purpose | By mistake |
|------|------------|------------|
| Good | Feature    |            |
| Bad  |            | Bug        |
|      | On purpose  | By mistake |
|------|-------------|------------|
| Good | Feature     |            |
| Bad  | Antifeature | Bug        |
|      | On purpose  | By mistake |
|------|-------------|------------|
| Good | Feature     | Antibug    |
| Bad  | Antifeature | Bug        |